Results 1 - 20 of 35
1.
Comput Intell Neurosci ; 2023: 1305583, 2023.
Article in English | MEDLINE | ID: covidwho-2194246

ABSTRACT

Diabetic retinopathy (DR) is a common retinal vascular disease that can cause severe visual impairment. It is of great clinical significance to use fundus images for intelligent diagnosis of DR. In this paper, an intelligent DR classification model for fundus images is proposed. This method can detect all five stages of DR: no DR, mild, moderate, severe, and proliferative. The model is composed of two key modules: the feature extraction block (FEB), which extracts features from fundus images, and the grading prediction block (GPB), which classifies the five stages of DR. The transformer in the FEB provides fine-grained attention that focuses on retinal hemorrhage and exudate areas. The residual attention in the GPB can effectively capture the different spatial regions occupied by different classes of objects. Comprehensive experiments on the DDR dataset demonstrate the effectiveness of our method, which achieves competitive performance compared with the benchmark method.
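The FEB/GPB design described above pairs a transformer feature extractor with a residual-attention grading head. A minimal sketch of that general idea in PyTorch follows; the layer sizes, token shapes, and the exact residual-attention formulation are assumptions for illustration, not the authors' published implementation.

import torch
import torch.nn as nn

class FeatureExtractionBlock(nn.Module):
    """Transformer encoder over patch tokens (assumed stand-in for the FEB)."""
    def __init__(self, dim=256, heads=4, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens):                      # tokens: (batch, n_patches, dim)
        return self.encoder(tokens)

class GradingPredictionBlock(nn.Module):
    """Residual-attention classifier over the five DR grades (assumed stand-in for the GPB)."""
    def __init__(self, dim=256, num_classes=5):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.head = nn.Linear(dim, num_classes)

    def forward(self, tokens):
        weights = self.attn(tokens)                 # per-token attention weights
        pooled = (tokens + tokens * weights).mean(dim=1)   # residual attention, then pooling
        return self.head(pooled)                    # logits: no DR / mild / moderate / severe / proliferative

tokens = torch.randn(2, 64, 256)                    # placeholder patch embeddings of two fundus images
logits = GradingPredictionBlock()(FeatureExtractionBlock()(tokens))
print(logits.shape)                                 # torch.Size([2, 5])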


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnostic imaging , Fundus Oculi , Image Interpretation, Computer-Assisted/methods , Benchmarking
2.
Sci Rep ; 12(1): 1716, 2022 02 02.
Article in English | MEDLINE | ID: covidwho-1900583

ABSTRACT

The rapid evolution of the novel coronavirus disease (COVID-19) pandemic has resulted in an urgent need for effective clinical tools to reduce transmission and manage severe illness. Numerous teams are quickly developing artificial intelligence approaches to these problems, including using deep learning to predict COVID-19 diagnosis and prognosis from chest computed tomography (CT) imaging data. In this work, we assess the value of aggregated chest CT data for COVID-19 prognosis compared to clinical metadata alone. We develop a novel patient-level algorithm to aggregate the chest CT volume into a 2D representation that can be easily integrated with clinical metadata to distinguish COVID-19 pneumonia from chest CT volumes of healthy participants and participants with other viral pneumonia. Furthermore, we present a multitask model for joint segmentation of different classes of pulmonary lesions present in COVID-19 infected lungs that can outperform individual segmentation models for each task. We directly compare this multitask segmentation approach to combining feature-agnostic volumetric CT classification feature maps with clinical metadata for predicting mortality. We show that the combination of features derived from the chest CT volumes improves the AUC to 0.80 from the 0.52 obtained by using patients' clinical data alone. These approaches enable the automated extraction of clinically relevant features from chest CT volumes for risk stratification of COVID-19 patients.
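As a rough illustration of the general idea of collapsing a chest CT volume into a 2D representation and fusing it with clinical metadata (not the authors' patient-level algorithm; the projection, pooling grid, classifier, and all data below are synthetic placeholders):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def aggregate_volume(volume, grid=16):
    """Collapse a (slices, H, W) CT volume into a coarse 2D grid of pooled intensities."""
    proj = volume.max(axis=0)                        # axial maximum-intensity projection
    h, w = proj.shape
    proj = proj[: h - h % grid, : w - w % grid]      # crop so the grid divides evenly
    blocks = proj.reshape(grid, proj.shape[0] // grid, grid, proj.shape[1] // grid)
    return blocks.mean(axis=(1, 3)).ravel()          # grid x grid feature vector

rng = np.random.default_rng(0)
ct_features = np.stack([aggregate_volume(rng.random((40, 64, 64))) for _ in range(60)])
clinical = rng.random((60, 5))                       # placeholder clinical metadata (age, labs, ...)
y = rng.integers(0, 2, 60)                           # placeholder mortality labels

X = np.hstack([ct_features, clinical])               # imaging + metadata fusion
model = LogisticRegression(max_iter=2000).fit(X, y)
print("training AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))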


Subject(s)
COVID-19/diagnosis , COVID-19/virology , Deep Learning , SARS-CoV-2 , Thorax/diagnostic imaging , Thorax/pathology , Tomography, X-Ray Computed , Algorithms , COVID-19/mortality , Databases, Genetic , Humans , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Prognosis , Tomography, X-Ray Computed/methods , Tomography, X-Ray Computed/standards
3.
AJR Am J Roentgenol ; 218(2): 270-278, 2022 02.
Article in English | MEDLINE | ID: covidwho-1793148

ABSTRACT

BACKGROUND. The need for second visits between screening mammography and diagnostic imaging contributes to disparities in the time to breast cancer diagnosis. During the COVID-19 pandemic, an immediate-read screening mammography program was implemented to reduce patient visits and decrease time to diagnostic imaging. OBJECTIVE. The purpose of this study was to measure the impact of an immediate-read screening program with a focus on disparities in same-day diagnostic imaging after abnormal findings at screening mammography. METHODS. In May 2020, an immediate-read screening program was implemented whereby a dedicated breast imaging radiologist interpreted all screening mammograms in real time; patients received results before discharge; and efforts were made to perform any recommended diagnostic imaging during the visit (performed by different radiologists). Screening mammographic examinations performed from June 1, 2019, through October 31, 2019 (preimplementation period), and from June 1, 2020, through October 31, 2020 (postimplementation period), were retrospectively identified. Patient characteristics were recorded from the electronic medical record. Multivariable logistic regression models incorporating patient age, race and ethnicity, language, and insurance type were estimated to identify factors associated with same-day diagnostic imaging. Screening metrics were compared between periods. RESULTS. A total of 8222 preimplementation and 7235 postimplementation screening examinations were included; 521 patients had abnormal screening findings before implementation, and 359 after implementation. Before implementation, 14.8% of patients underwent same-day diagnostic imaging after abnormal screening mammograms. This percentage increased to 60.7% after implementation. Before implementation, patients who identified their race as other than White had significantly lower odds than patients who identified their race as White of undergoing same-day diagnostic imaging after receiving abnormal screening results (adjusted odds ratio, 0.30; 95% CI, 0.10-0.86; p = .03). After implementation, the odds of same-day diagnostic imaging were not significantly different between patients of other races and White patients (adjusted odds ratio, 0.92; 95% CI, 0.50-1.71; p = .80). After implementation, there was no significant difference in race and ethnicity between patients who underwent and those who did not undergo same-day diagnostic imaging after receiving abnormal results of screening mammography (p > .05). The rate of abnormal interpretation was significantly lower after than before implementation (5.0% vs 6.3%; p < .001). Cancer detection rate and PPV1 (PPV based on positive findings at screening examination) were not significantly different before and after implementation (p > .05). CONCLUSION. Implementation of the immediate-read screening mammography program reduced prior racial and ethnic disparities in same-day diagnostic imaging after abnormal screening mammograms. CLINICAL IMPACT. An immediate-read screening program provides a new paradigm for improved screening mammography workflow that allows more rapid diagnostic workup with reduced disparities in care.
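The adjusted odds ratios above come from multivariable logistic regression. A minimal sketch of that style of analysis follows, using statsmodels on entirely synthetic data; the variable names and values are invented for illustration and are not the study's dataset.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "same_day_imaging": rng.integers(0, 2, 500),        # outcome: same-day diagnostic imaging
    "age": rng.normal(55, 10, 500),
    "race_other_than_white": rng.integers(0, 2, 500),
    "non_english_language": rng.integers(0, 2, 500),
    "public_insurance": rng.integers(0, 2, 500),
})

X = sm.add_constant(df.drop(columns="same_day_imaging"))
fit = sm.Logit(df["same_day_imaging"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)                         # adjusted odds ratios
ci = np.exp(fit.conf_int())                              # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))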


Subject(s)
Breast Neoplasms/diagnostic imaging , COVID-19/prevention & control , Delayed Diagnosis/prevention & control , Healthcare Disparities/statistics & numerical data , Image Interpretation, Computer-Assisted/methods , Mammography/methods , Racial Groups/statistics & numerical data , Adult , Breast/diagnostic imaging , Female , Humans , Middle Aged , Pandemics , Retrospective Studies , SARS-CoV-2 , Time
4.
Sensors (Basel) ; 22(6)2022 Mar 18.
Article in English | MEDLINE | ID: covidwho-1765834

ABSTRACT

Blood cancer, or leukemia, has a negative impact on the blood and/or bone marrow of children and adults. Acute lymphocytic leukemia (ALL) and acute myeloid leukemia (AML) are two subtypes of acute leukemia. The Internet of Medical Things (IoMT) and artificial intelligence have allowed for the development of advanced technologies to assist in recently introduced medical procedures. Hence, in this paper, we propose a new intelligent IoMT framework for the automated classification of acute leukemias using microscopic blood images. The workflow of our proposed framework includes three main stages. First, blood samples are collected by wireless digital microscopy and sent to a cloud server. Second, the cloud server carries out automatic identification of the blood condition (leukemia or healthy) using our developed generative adversarial network (GAN) classifier. Finally, the classification results are sent to a hematologist for medical approval. The developed GAN classifier was successfully evaluated on two public data sets, ALL-IDB and the ASH image bank, and achieved the best accuracy scores of 98.67% for binary classification (ALL or healthy) and 95.5% for multi-class classification (ALL, AML, and normal blood cells) when compared with existing state-of-the-art methods. The results of this study demonstrate the feasibility of our proposed IoMT framework for the automated diagnosis of acute leukemia. Clinical realization of this blood diagnosis system is planned as future work.
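The abstract does not detail the GAN classifier's architecture, so the snippet below is only an assumed illustration of the usual pattern: a small convolutional discriminator with an adversarial (real vs. generated) head plus a class head for ALL / AML / normal cells. The generator and the adversarial training loop are omitted.

import torch
import torch.nn as nn

class BloodSmearDiscriminator(nn.Module):
    def __init__(self, num_classes=3):                   # ALL, AML, normal (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.real_fake = nn.Linear(128, 1)                # adversarial head
        self.classifier = nn.Linear(128, num_classes)     # diagnostic head

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.real_fake(h), self.classifier(h)

adv_logit, class_logits = BloodSmearDiscriminator()(torch.randn(4, 3, 128, 128))
print(adv_logit.shape, class_logits.shape)                # (4, 1) and (4, 3)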


Subject(s)
Internet of Things , Leukemia , Algorithms , Artificial Intelligence , Child , Humans , Image Interpretation, Computer-Assisted/methods
5.
PLoS One ; 17(1): e0262052, 2022.
Article in English | MEDLINE | ID: covidwho-1643253

ABSTRACT

The COVID-19 epidemic has had a catastrophic impact on global well-being and public health. More than 27 million confirmed cases have been reported worldwide to date. Due to the growing number of confirmed cases and the challenges posed by variants of COVID-19, timely and accurate classification of healthy and infected patients is essential to control and treat the disease. We aim to develop a deep learning-based system for the effective classification and reliable detection of COVID-19 using chest radiography. Firstly, we evaluate the performance of various state-of-the-art convolutional neural networks (CNNs) proposed over recent years for medical image classification. Secondly, we develop and train a CNN from scratch. In both cases, we use a public X-ray dataset for training and validation purposes. For transfer learning, we obtain 100% accuracy for binary classification (i.e., Normal/COVID-19) and 87.50% accuracy for three-class classification (Normal/COVID-19/Pneumonia). With the CNN trained from scratch, we achieve 93.75% accuracy for three-class classification. In the case of transfer learning, the classification accuracy drops as the number of classes increases. The results are demonstrated by comprehensive receiver operating characteristic (ROC) and confusion matrix analysis with 10-fold cross-validation.
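A minimal sketch of the transfer-learning setup described (pretrained backbone, new classification head); the backbone choice, input size, and hyperparameters are assumptions rather than the paper's exact configuration, and the snippet assumes torchvision >= 0.13 (it downloads ImageNet weights).

import torch
import torch.nn as nn
from torchvision import models

num_classes = 3                                          # Normal / COVID-19 / Pneumonia
backbone = models.resnet18(weights="IMAGENET1K_V1")      # pretrained feature extractor
for param in backbone.parameters():                      # freeze pretrained layers
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)   # new trainable head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

x = torch.randn(8, 3, 224, 224)                          # placeholder chest X-ray batch
y = torch.randint(0, num_classes, (8,))
loss = criterion(backbone(x), y)
loss.backward()
optimizer.step()
print(float(loss))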


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Pneumonia, Bacterial/diagnostic imaging , COVID-19/pathology , COVID-19/virology , Case-Control Studies , Databases, Factual , Diagnosis, Differential , Female , Humans , Male , Pneumonia, Bacterial/pathology , Pneumonia, Bacterial/virology , ROC Curve , Radiography, Thoracic , SARS-CoV-2/pathogenicity
6.
Comput Math Methods Med ; 2021: 8081276, 2021.
Article in English | MEDLINE | ID: covidwho-1435106

ABSTRACT

The use of Internet technology has led to the availability of multimedia data in many different formats. Unauthorized users misuse multimedia content by distributing it on various websites to earn money deceptively, without the original copyright holder's consent. With the rise in COVID-19 cases, a great deal of patient information is leaked without patients' knowledge, so an intelligent technique is required to protect the integrity of patient data by placing an invisible signal, known as a watermark, on medical images. In this paper, a new watermarking method is proposed for both standard and medical images. The paper addresses the use of digital rights management in medical applications, such as embedding watermarks in medical images related to neurodegenerative disorders, lung disorders, and heart conditions. Various quality parameters are used to evaluate the developed method. In addition, the robustness of the watermarking scheme is tested by applying various signal-processing attacks.
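The abstract does not specify the watermarking transform, so the snippet below is only a generic stand-in for the idea of an invisible watermark: least-significant-bit embedding with NumPy and a PSNR check as one of the usual quality parameters. It is not the proposed scheme.

import numpy as np

def embed_lsb(cover, bits):
    """Hide one watermark bit in the least significant bit of each leading pixel."""
    flat = cover.flatten()                                # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(watermarked, n_bits):
    return watermarked.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # placeholder medical image
bits = np.random.randint(0, 2, 128, dtype=np.uint8)           # watermark payload
stego = embed_lsb(cover, bits)
assert np.array_equal(extract_lsb(stego, bits.size), bits)

mse = np.mean((cover.astype(float) - stego) ** 2)
psnr = 10 * np.log10(255 ** 2 / max(mse, 1e-12))              # imperceptibility metric
print(f"PSNR of watermarked image: {psnr:.1f} dB")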


Subject(s)
COVID-19/diagnostic imaging , Computer Security , Neurodegenerative Diseases/diagnostic imaging , Neurodegenerative Diseases/genetics , Algorithms , Computational Biology/methods , Humans , Image Interpretation, Computer-Assisted/methods , Internet , Models, Statistical
7.
IEEE J Transl Eng Health Med ; 9: 1800209, 2021.
Article in English | MEDLINE | ID: covidwho-1388111

ABSTRACT

Background: Accurate and fast diagnosis of COVID-19 is very important for managing the medical condition of affected persons. The task is challenging owing to the shortage and ineffectiveness of clinical testing kits. However, these problems can be mitigated by employing computational intelligence techniques on radiological images such as computed tomography (CT) scans of the lungs. Extensive research has been reported on using deep learning models to diagnose the severity of COVID-19 from CT images. This has undoubtedly minimized manual involvement in abnormality identification, but the reported detection accuracy is limited. Methods: The present work proposes an expert model based on deep features and a Parameter-Free BAT (PF-BAT) optimized fuzzy k-nearest neighbor (PF-FKNN) classifier to diagnose the novel coronavirus. In this model, features are extracted from the fully connected layer of a transfer-learned MobileNetv2, followed by FKNN training. The hyperparameters of the FKNN are fine-tuned using PF-BAT. Results: The experimental results on benchmark COVID CT scan data reveal that the proposed algorithm attains a validation accuracy of 99.38%, which is better than existing state-of-the-art methods. Conclusion: The proposed model will help in the timely and accurate identification of coronavirus infection at its various phases. Such rapid diagnosis will assist clinicians in managing patients' care and will support speedy recovery from the disease. Clinical and Translational Impact Statement - The proposed automated system can provide accurate and fast detection of the COVID-19 signature from lung radiographs. Also, the use of the lighter MobileNetv2 architecture makes it practical for real-time deployment.
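As a simplified stand-in for the described pipeline (deep features from MobileNetV2 feeding a nearest-neighbor classifier), the sketch below swaps the fuzzy KNN and the PF-BAT optimizer for a plain KNN with a grid search; it is not the PF-FKNN model, and it assumes torchvision >= 0.13 with randomly initialized weights in place of the transfer-learned ones.

import torch
from torchvision import models
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

backbone = models.mobilenet_v2(weights=None)             # in practice: weights="IMAGENET1K_V1"
backbone.classifier = torch.nn.Identity()                # expose the 1280-d penultimate features
backbone.eval()

with torch.no_grad():
    X = backbone(torch.randn(40, 3, 160, 160)).numpy()   # placeholder CT slices
y = torch.randint(0, 2, (40,)).numpy()                    # placeholder COVID / non-COVID labels

search = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [3, 5, 7]}, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 2))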


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Algorithms , Female , Humans , Lung/diagnostic imaging , Male , SARS-CoV-2 , Tomography, X-Ray Computed
8.
Ann Diagn Pathol ; 54: 151807, 2021 Oct.
Article in English | MEDLINE | ID: covidwho-1356125

ABSTRACT

Digital pathology has become an integral part of pathology education in recent years, particularly during the COVID-19 pandemic, for its potential utility as a teaching tool that augments the traditional 1-to-1 sign-out experience. Herein, we evaluate the utility of whole slide imaging (WSI) in reducing diagnostic errors in pigmented cutaneous lesions by pathology fellows without subspecialty training in dermatopathology. Ten cases covering 4 types of pigmented cutaneous lesions commonly encountered by general pathologists were selected. The corresponding whole slide images were distributed to our fellows, along with two sets of online surveys, each composed of 10 multiple-choice questions with 4 answers. Identical cases were used for both surveys to minimize variability in trainees' scores depending on the perceived level of difficulty, with the second set distributed after random shuffling. Brief image-based teaching slides were provided to trainees as a self-assessment tool between the surveys. Pre- and post-self-assessment scores were analyzed. A total of 61% (17/28) and 39% (11/28) of fellows completed the first and second surveys, respectively. The mean score on the first survey was 5.2/10; the mean score on the second survey, following self-assessment, increased to 7.2/10. Of the trainees, 64% (7/11) showed an improvement in their scores, with 1 trainee improving his/her score by 8 points. No fellow scored lower post-self-assessment than on the initial assessment. The difference in individual scores between the two surveys was statistically significant (p = 0.003). Our study demonstrates the utility of WSI-based self-assessment learning as a means of improving the diagnostic skills of pathology trainees in a short period of time.
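The abstract does not state which paired test produced p = 0.003; for small paired samples like these, a Wilcoxon signed-rank test is one common choice, sketched below with invented placeholder scores rather than the study's data.

import numpy as np
from scipy.stats import wilcoxon

pre = np.array([4, 5, 6, 5, 3, 6, 5, 4, 6, 5, 3])        # first-survey scores out of 10 (placeholder)
post = np.array([6, 7, 9, 7, 5, 8, 7, 6, 9, 7, 8])       # second-survey scores after self-assessment
stat, p = wilcoxon(post, pre)                             # paired, non-parametric comparison
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")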


Subject(s)
COVID-19/prevention & control , Clinical Competence , Education, Distance/methods , Education, Medical, Graduate/methods , Image Interpretation, Computer-Assisted/methods , Pathology, Clinical/education , Skin Diseases/pathology , Diagnostic Errors/prevention & control , Fellowships and Scholarships , Humans , Pathology, Clinical/methods , Skin Diseases/diagnosis , United States
9.
Comput Math Methods Med ; 2021: 9998379, 2021.
Article in English | MEDLINE | ID: covidwho-1314186

ABSTRACT

In recent years, computerized biomedical imaging and analysis have become highly promising and beneficial, providing valuable information for the diagnosis of skin lesions. Modern diagnostic systems have been developed that can help detect melanoma in its early stages and save many lives, and there has been significant growth in the design of computer-aided diagnosis (CAD) systems using advanced artificial intelligence. The purpose of the present research is to develop a system to diagnose skin cancer that achieves a high detection rate. The proposed system was developed using deep learning and traditional machine learning algorithms. Dermoscopy images were collected from the PH2 and ISIC 2018 datasets to evaluate the diagnostic system. The developed system is divided into a feature-based approach and a deep learning approach. The feature-based system was built on feature-extraction methods: the active contour method was used to segment the lesion from dermoscopy images, and the segmented lesions were processed using hybrid feature extraction, namely the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods, to extract texture features. The obtained features were then classified using an artificial neural network (ANN). In the second system, a convolutional neural network (CNN) was applied for efficient classification of skin diseases; the CNN was pretrained using the AlexNet and ResNet50 transfer learning models. The experimental results show that the proposed method outperformed state-of-the-art methods on the PH2 and ISIC 2018 datasets. Standard evaluation metrics, including accuracy, specificity, sensitivity, precision, recall, and F-score, were employed to evaluate the results of the two proposed systems. The ANN model achieved the highest accuracy for PH2 (97.50%) and ISIC 2018 (98.35%) compared with the CNN model. An evaluation and comparison of the proposed systems for the classification and detection of melanoma are presented.
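A minimal sketch of the LBP + GLCM texture-feature step described, using scikit-image (version >= 0.19 for graycomatrix/graycoprops); the radius, distances, angles, and the dummy image are assumptions rather than the paper's settings.

import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

lesion = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # placeholder segmented lesion

# Local Binary Pattern histogram
lbp = local_binary_pattern(lesion, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=int(lbp.max()) + 1, density=True)

# Gray Level Co-occurrence Matrix statistics
glcm = graycomatrix(lesion, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
glcm_feats = [graycoprops(glcm, prop).mean()
              for prop in ("contrast", "homogeneity", "energy", "correlation")]

features = np.concatenate([lbp_hist, glcm_feats])        # feature vector fed to the ANN classifier
print(features.shape)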


Subject(s)
Diagnosis, Computer-Assisted/methods , Melanoma/diagnostic imaging , Skin Neoplasms/diagnostic imaging , Algorithms , Artificial Intelligence , Computational Biology , Databases, Factual/statistics & numerical data , Deep Learning , Dermoscopy , Diagnosis, Computer-Assisted/statistics & numerical data , Early Detection of Cancer/methods , Early Detection of Cancer/statistics & numerical data , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/statistics & numerical data , Neural Networks, Computer , Skin Diseases/classification , Skin Diseases/diagnostic imaging
10.
IEEE Trans Ultrason Ferroelectr Freq Control ; 68(7): 2507-2515, 2021 07.
Article in English | MEDLINE | ID: covidwho-1288239

ABSTRACT

Being radiation-free, portable, and suitable for repeated use, ultrasonography is playing an important role in diagnosing and evaluating COVID-19 pneumonia (PN) in this epidemic. By means of lung ultrasound scores (LUSS), lung ultrasound (LUS) has been used to estimate the excess lung fluid that is an important clinical manifestation of COVID-19 PN, with high sensitivity and specificity. However, as a qualitative method, LUSS suffers from large interobserver variation and requires experienced clinicians. Considering this limitation, we developed a quantitative and automatic lung ultrasound scoring system for evaluating COVID-19 PN. A total of 1527 ultrasound images prospectively collected from 31 COVID-19 PN patients with different clinical conditions were evaluated and scored with LUSS by experienced clinicians. All images were processed via a series of computer-aided analysis steps, including curve-to-linear conversion, pleural line detection, region-of-interest (ROI) selection, and feature extraction. A collection of 28 features extracted from the ROI was specifically defined to mimic the LUSS. Multilayer fully connected neural networks, support vector machines, and decision trees were developed for scoring LUS images using fivefold cross-validation. The model with two fully connected layers of 128 and 256 units gave the best accuracy of 87%. It is concluded that the proposed method can assess ultrasound images by assigning LUSS automatically with high accuracy, and is potentially applicable in the clinic.
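A sketch of the fivefold cross-validated comparison of classifiers on the 28 hand-crafted ROI features; the data below are synthetic placeholders, not the 1527 clinical LUS images, and the layer sizes follow the two-layer reading of the best model.

import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 28))                           # 28 ROI features per image (placeholder)
y = rng.integers(0, 4, 400)                              # LUSS scores 0-3 (placeholder)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "MLP (128, 256)": MLPClassifier(hidden_layer_sizes=(128, 256), max_iter=300),
    "SVM": SVC(),
    "decision tree": DecisionTreeClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=cv)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")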


Subject(s)
COVID-19/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Lung/diagnostic imaging , Neural Networks, Computer , Ultrasonography/methods , Adult , Aged , Female , Humans , Male , Middle Aged , SARS-CoV-2
11.
IEEE Trans Ultrason Ferroelectr Freq Control ; 67(11): 2258-2264, 2020 11.
Article in English | MEDLINE | ID: covidwho-1284995

ABSTRACT

Lung ultrasound (LUS) is a practical tool for lung diagnosis when computed tomography (CT) is not available. Recent findings suggest that LUS diagnosis is highly advantageous because of its mobility and its correlation with radiological findings in viral pneumonia. Simple models for both educational evaluation and technical evaluation are needed. Therefore, this work investigates the usability of a large animal model for reproducing the LUS features of viral pneumonia using saline one-lung flooding. Six pigs were intubated with a double-lumen tube, and the left lung was instilled with saline. During instillation of up to 12.5 ml/kg, the sonographic features were assessed. All features present during viral pneumonia were found, such as B-lines, white lung syndrome, pleural thickening, and the formation of pleural consolidations. The sonographic findings correlate well with current LUS scores for COVID-19. Scores of 1, 2, and 3 were predominantly present at 1-4, 4-8, and 8-12 ml/kg of saline instillation, respectively. The noninfective animal model can be used for further investigation of LUS features and can serve in education by helping with the appropriate handling of LUS in clinical practice during the management of viral pneumonia.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Lung , Pneumonia, Viral , Ultrasonography/methods , Animals , COVID-19 , Female , Lung/diagnostic imaging , Lung/pathology , Pneumonia, Viral/diagnostic imaging , Pneumonia, Viral/pathology , Swine
13.
IEEE Trans Ultrason Ferroelectr Freq Control ; 68(6): 2023-2037, 2021 06.
Article in English | MEDLINE | ID: covidwho-1243581

ABSTRACT

Lung ultrasound (US) imaging has the potential to be an effective point-of-care test for detection of COVID-19, due to its ease of operation with minimal personal protective equipment along with easy disinfection. The current state-of-the-art deep learning models for detection of COVID-19 are heavy models that may not be easy to deploy on the mobile platforms commonly utilized in point-of-care testing. In this work, we develop a lightweight, mobile-friendly, efficient deep learning model for detection of COVID-19 using lung US images. Three classes were included in this task: COVID-19, pneumonia, and healthy. The developed network, named Mini-COVIDNet, was benchmarked against other lightweight neural network models along with a state-of-the-art heavy model. The proposed network achieves the highest accuracy of 83.2% and requires a training time of only 24 min. Mini-COVIDNet has 4.39 times fewer parameters than its next best performing network and requires only 51.29 MB of memory, making point-of-care detection of COVID-19 using lung US imaging feasible on a mobile platform. Deployment of these lightweight networks on embedded platforms shows that the proposed Mini-COVIDNet is highly versatile and provides strong performance in terms of accuracy, with latency of the same order as other lightweight networks. The developed lightweight models are available at https://github.com/navchetan-awasthi/Mini-COVIDNet.
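The parameter-count and memory figures quoted above are straightforward to measure in PyTorch; the tiny depthwise-separable block below is only a MobileNet-style stand-in, not the Mini-COVIDNet architecture.

import torch.nn as nn

block = nn.Sequential(                                        # depthwise-separable building block
    nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32),   # depthwise convolution
    nn.Conv2d(32, 64, kernel_size=1),                          # pointwise convolution
    nn.BatchNorm2d(64),
    nn.ReLU(),
)
n_params = sum(p.numel() for p in block.parameters())
size_mb = sum(p.numel() * p.element_size() for p in block.parameters()) / 1e6
print(f"{n_params} parameters, {size_mb:.3f} MB of weights")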


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Point-of-Care Systems , Ultrasonography/methods , Humans , SARS-CoV-2
14.
Am J Clin Pathol ; 155(5): 638-648, 2021 04 26.
Article in English | MEDLINE | ID: covidwho-1207251

ABSTRACT

OBJECTIVES: The ongoing global severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic necessitates adaptations in the practice of surgical pathology at scale. Primary diagnosis by whole-slide imaging (WSI) is a key component that would aid departments in providing uninterrupted histopathology diagnosis and protecting revenue streams from disruption. We sought to perform rapid validation of the use of WSI in primary diagnosis, meeting the recommendations of the College of American Pathologists guidelines. METHODS: Glass slides from clinically reported cases from 5 participating pathologists, with a preset washout period, were digitally scanned and reviewed in settings identical to typical reporting. Cases were classified as concordant or as having minor or major disagreement with the original diagnosis. Randomized subsampling was performed, and mean concordance rates were calculated. RESULTS: In total, 171 cases were included and distributed equally among participants. For the group as a whole, the mean concordance rate in sampled cases (n = 90) was 83.6% counting all discrepancies and 94.6% counting only major disagreements. The mean pathologist concordance rate in sampled cases (n = 18) ranged from 90.49% to 97%. CONCLUSIONS: We describe a novel double-blinded method for rapid validation of WSI for primary diagnosis. Our findings highlight the range of diagnostic reproducibility encountered when deploying digital methods.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Pathology, Surgical/methods , Telepathology/methods , COVID-19/epidemiology , COVID-19/prevention & control , Double-Blind Method , Humans , Image Interpretation, Computer-Assisted/standards , Observer Variation , Pandemics/prevention & control , Pathology, Surgical/standards , Practice Guidelines as Topic , Reproducibility of Results , Retrospective Studies , Telepathology/standards
15.
Viruses ; 13(4)2021 04 02.
Article in English | MEDLINE | ID: covidwho-1167763

ABSTRACT

The visualization of cellular ultrastructure over a wide range of volumes is becoming possible through increasingly powerful techniques grouped under the rubric "volume electron microscopy," or volume EM (vEM). Focused ion beam scanning electron microscopy (FIB-SEM) occupies a "Goldilocks zone" in vEM: iterative and automated cycles of milling and imaging allow the interrogation of microns-thick specimens in 3-D at resolutions of tens of nanometers or less. This gives FIB-SEM the unique ability to aid the accurate and precise study of the architecture of virus-cell interactions. Here we give the virologist or cell biologist a primer on FIB-SEM imaging in the context of vEM and discuss practical aspects of a room-temperature FIB-SEM experiment. In an in vitro study of SARS-CoV-2 infection, we show that accurate quantitation of viral densities and surface curvatures enabled by FIB-SEM imaging reveals that SARS-CoV-2 virions are preferentially located at areas of the plasma membrane with positive mean curvature.


Subject(s)
COVID-19/pathology , Host Microbial Interactions , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Microscopy, Electron, Scanning/methods , SARS-CoV-2 , Animals , Cell Communication , Cell Membrane , Chlorocebus aethiops , Epithelial Cells/virology , Humans , Lung , Vero Cells
16.
J Healthc Eng ; 2021: 8829829, 2021.
Article in English | MEDLINE | ID: covidwho-1145382

ABSTRACT

COVID-19 has affected the whole world drastically, and a huge number of people have lost their lives due to the pandemic. Early detection of COVID-19 infection is helpful for treatment and quarantine. Therefore, many researchers have designed deep learning models for the early diagnosis of COVID-19-infected patients. However, deep learning models suffer from overfitting and hyperparameter-tuning issues. To overcome these issues, in this paper, a metaheuristic-based deep COVID-19 screening model is proposed for X-ray images. The modified AlexNet architecture is used for feature extraction and classification of the input images, and the Strength Pareto evolutionary algorithm-II (SPEA-II) is used to tune the hyperparameters of the modified AlexNet. The proposed model is tested on a four-class dataset (i.e., COVID-19, tuberculosis, pneumonia, or healthy). Finally, comparisons are drawn between the existing models and the proposed model.


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic , Algorithms , Humans , Neural Networks, Computer , Sensitivity and Specificity
17.
IEEE Trans Med Imaging ; 40(3): 1032-1041, 2021 03.
Article in English | MEDLINE | ID: covidwho-1114979

ABSTRACT

Anomaly detection refers to the identification of cases that do not conform to the expected pattern, and it plays a key role in diverse research areas and application domains. Most existing methods can be summarized as anomaly object detection-based and reconstruction error-based techniques. However, due to the difficulty of encompassing the high diversity of real-world outliers and the inaccessibility of the inference process, most of them have not achieved groundbreaking progress. To address these shortcomings, and motivated by memory-based decision-making and the visual attention mechanism, which acts as a filter selecting environmental information in the human visual perceptual system, in this paper we propose a Multi-scale Attention Memory with hash addressing Autoencoder network (MAMA Net) for anomaly detection. First, to overcome the problems that result from the restricted stationary receptive field of the convolution operator, we introduce a multi-scale global spatial attention block that can be straightforwardly plugged into any network as a sampling, upsampling, or downsampling function. Owing to its efficient feature representation ability, networks can achieve competitive results with only a few such blocks. Second, it is observed that a traditional autoencoder can learn an ambiguous model that also reconstructs anomalies "well" due to the lack of constraints in the training and inference process. To mitigate this challenge, we design a hash addressing memory module that forces abnormalities to produce higher reconstruction errors for classification. In addition, we couple the mean square error (MSE) with a Wasserstein loss to improve the encoded data distribution. Experiments on various datasets, including two different COVID-19 datasets and one brain MRI (RIDER) dataset, demonstrate the robustness and excellent generalization of the proposed MAMA Net.
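MAMA Net builds on the reconstruction-error principle: an autoencoder trained only on normal data should reconstruct anomalies poorly, so the per-sample reconstruction error serves as the anomaly score. The toy sketch below illustrates that principle only; the multi-scale attention blocks, hash-addressing memory, and Wasserstein term of the actual model are omitted, and the network sizes are assumptions.

import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
normal_data = torch.randn(256, 784)                       # placeholder "normal" images, flattened

for _ in range(5):                                        # brief training on normal data only
    loss = nn.functional.mse_loss(model(normal_data), normal_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    test = torch.randn(10, 784)
    anomaly_score = ((model(test) - test) ** 2).mean(dim=1)   # higher error = more anomalous
print(anomaly_score)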


Subject(s)
Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Algorithms , Brain/diagnostic imaging , COVID-19/diagnostic imaging , Humans , Lung/diagnostic imaging , Magnetic Resonance Imaging , SARS-CoV-2 , Tomography, X-Ray Computed
18.
IEEE Trans Neural Netw Learn Syst ; 32(4): 1408-1417, 2021 04.
Article in English | MEDLINE | ID: covidwho-1078912

ABSTRACT

The early and reliable detection of COVID-19 infected patients is essential to prevent and limit its outbreak. PCR tests for COVID-19 detection are not available in many countries, and there are also genuine concerns about their reliability and performance. Motivated by these shortcomings, this article proposes a deep uncertainty-aware transfer learning framework for COVID-19 detection using medical images. Four popular convolutional neural networks (CNNs), including VGG16, ResNet50, DenseNet121, and InceptionResNetV2, are first applied to extract deep features from chest X-ray and computed tomography (CT) images. Extracted features are then processed by different machine learning and statistical modeling techniques to identify COVID-19 cases. We also calculate and report the epistemic uncertainty of classification results to identify regions where the trained models are not confident about their decisions (the out-of-distribution problem). Comprehensive simulation results for the X-ray and CT image data sets indicate that linear support vector machine and neural network models achieve the best results as measured by accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC). It is also found that predictive uncertainty estimates are much higher for CT images than for X-ray images.
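One common way to obtain the kind of epistemic uncertainty reported here is an ensemble with the mutual-information decomposition of predictive entropy. The sketch below is a generic illustration on synthetic "deep features", not the paper's exact uncertainty-quantification method.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 20)), rng.integers(0, 2, 300)    # placeholder deep features and labels
X_test = rng.normal(size=(5, 20))

probs = []
for _ in range(20):                                       # bootstrap ensemble
    idx = rng.integers(0, len(X), len(X))
    member = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    probs.append(member.predict_proba(X_test))
probs = np.stack(probs)                                   # (members, samples, classes)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

total = entropy(probs.mean(axis=0))                       # total predictive uncertainty
aleatoric = entropy(probs).mean(axis=0)                   # expected per-member entropy
epistemic = total - aleatoric                             # model disagreement (mutual information)
print(np.round(epistemic, 4))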


Subject(s)
COVID-19 Testing/methods , COVID-19/diagnosis , Image Interpretation, Computer-Assisted/methods , Transfer, Psychology , Uncertainty , Algorithms , COVID-19/diagnostic imaging , Computer Simulation , Deep Learning , Humans , Machine Learning , Neural Networks, Computer , ROC Curve , Radiography, Thoracic , Reproducibility of Results , Sensitivity and Specificity , Support Vector Machine , Thorax/diagnostic imaging , Tomography, X-Ray Computed
19.
IEEE Trans Ultrason Ferroelectr Freq Control ; 67(11): 2207-2217, 2020 11.
Article in English | MEDLINE | ID: covidwho-978667

ABSTRACT

Recent works have highlighted the significant potential of lung ultrasound (LUS) imaging in the management of subjects affected by COVID-19. In general, the development of objective, fast, and accurate automatic methods for LUS data evaluation is still at an early stage; this is particularly true for COVID-19 diagnostics. In this article, we propose an automatic and unsupervised method for the detection and localization of the pleural line in LUS data based on a hidden Markov model and the Viterbi algorithm. The pleural line localization step is followed by a supervised classification procedure based on a support vector machine (SVM). The classifier evaluates the health status of a patient and, if pathology is present, its severity, i.e., the score value for each image of a given LUS acquisition. The experiments performed on a variety of LUS data acquired in Italian hospitals with both linear and convex probes highlight the effectiveness of the proposed method. The average overall accuracy in detecting the pleura is 84% and 94% for convex and linear probes, respectively. The accuracy of the SVM classification in correctly evaluating the severity of COVID-19-related pleural line alterations is about 88% and 94% for convex and linear probes, respectively. The results, together with the visualization of the detected pleural line and the predicted score chart, provide significant support to medical staff in further evaluating the patient's condition.
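The pleural-line localization relies on a hidden Markov model decoded with the Viterbi algorithm. Below is a generic, self-contained log-domain Viterbi decoder; the two-state toy HMM and the observation sequence are illustrative assumptions, not the paper's model of LUS columns.

import numpy as np

def viterbi(log_start, log_trans, log_emit, observations):
    """Most likely hidden-state path for a discrete-observation HMM (log probabilities)."""
    n_states, T = log_start.shape[0], len(observations)
    score = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    score[0] = log_start + log_emit[:, observations[0]]
    for t in range(1, T):
        for s in range(n_states):
            cand = score[t - 1] + log_trans[:, s]          # best way to arrive in state s
            back[t, s] = np.argmax(cand)
            score[t, s] = cand[back[t, s]] + log_emit[s, observations[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):                          # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy HMM: 2 hidden states (e.g., "above pleura" / "at pleura"), 3 observation symbols.
log_start = np.log([0.8, 0.2])
log_trans = np.log([[0.7, 0.3], [0.2, 0.8]])
log_emit = np.log([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(log_start, log_trans, log_emit, [0, 1, 2, 2, 1]))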


Subject(s)
Coronavirus Infections/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Lung/diagnostic imaging , Pleura/diagnostic imaging , Pneumonia, Viral/diagnostic imaging , Ultrasonography/methods , Algorithms , COVID-19 , Humans , Pandemics , Signal Processing, Computer-Assisted , Support Vector Machine